    Children’s Learning of a Semantics-Free Artificial Grammar with Center Embedding

    Whether non-human animals can learn and process center embedding, a core property of human language syntax, is still debated. Artificial-grammar learning (AGL) has been used to compare humans and animals in the learning of center embedding. Until now, however, human participants have been limited to adults; data on children, the key players in natural language acquisition, are lacking. We created a novel game-like experimental paradigm combining the go/no-go procedure often used in animal research with the stepwise learning methods shown to be effective for center-embedding learning in human adults. Here we report that some children succeeded in learning a semantics-free artificial grammar with center embedding (the A2B2 grammar) in the auditory modality. Although their success rate was lower than that of adults, the successful children learned as efficiently as adults did. Where children struggled, their memory capacity appeared to limit their AGL performance.
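
    The stimulus logic of an indexed A2B2 grammar can be made concrete with a short sketch. The Python below is a minimal illustration, not the authors' actual stimuli: the syllable inventory and the A-to-B pairing table are hypothetical placeholders. It generates grammatical center-embedded strings (A1 A2 B2 B1) for "go" trials and crossed foils (A1 A2 B1 B2) for "no-go" trials.

```python
import random

# Hypothetical inventories: each A-category syllable is paired with
# exactly one B-category syllable (placeholders, not the study's stimuli).
A_SYLLABLES = ["ba", "di", "gu"]
B_FOR = {"ba": "pa", "di": "ti", "gu": "ku"}

def grammatical_a2b2():
    """A1 A2 B2 B1: each B matches its indexed A in mirror order."""
    a1, a2 = random.sample(A_SYLLABLES, 2)
    return [a1, a2, B_FOR[a2], B_FOR[a1]]

def crossed_foil():
    """A1 A2 B1 B2: same elements, but with crossed (ungrammatical) pairing."""
    a1, a2 = random.sample(A_SYLLABLES, 2)
    return [a1, a2, B_FOR[a1], B_FOR[a2]]

print("go (grammatical):", " ".join(grammatical_a2b2()))
print("no-go (foil):    ", " ".join(crossed_foil()))
```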

    The Non-Hierarchical Nature of the Chomsky Hierarchy-Driven Artificial-Grammar Learning

    Recent artificial-grammar learning (AGL) paradigms driven by the Chomsky hierarchy paved the way for direct comparisons between humans and animals in the learning of center embedding ([A[AB]B]). The AnBn grammars used by the first generation of such research lacked a crucial property of center embedding: explicit matching between paired elements ([A1 [A2 B2] B1]). This type of indexing is implemented in the second-generation AnBn grammars. This paper reviews recent studies using such grammars. Against the premises of these studies, we argue that even these newer AnBn grammars cannot test the learning of syntactic hierarchy. These studies nonetheless provide detailed information about the conditions under which human adults can learn an AnBn grammar with indexing. This knowledge serves to interpret recent animal studies, which make surprising claims about animals’ ability to handle center embedding.
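
    The difference between the two grammar generations can be stated operationally. The sketch below (category labels and the pairing table are assumptions, not the reviewed studies' materials) implements recognizers for both: the first-generation AnBn grammar is satisfied by counting alone, while the second-generation grammar additionally requires each B to mirror its indexed A partner.

```python
# Hypothetical pairing table: which B element matches which A element.
PAIR = {"A1": "B1", "A2": "B2", "A3": "B3"}

def accepts_first_gen(seq):
    """First generation: n A-category items followed by n B-category items."""
    n = len(seq) // 2
    return (len(seq) == 2 * n and n > 0
            and all(s in PAIR for s in seq[:n])
            and all(s in PAIR.values() for s in seq[n:]))

def accepts_second_gen(seq):
    """Second generation: Bs must mirror their As, e.g. A1 A2 B2 B1."""
    n = len(seq) // 2
    if not accepts_first_gen(seq):
        return False
    return all(PAIR[seq[i]] == seq[-(i + 1)] for i in range(n))

assert accepts_first_gen(["A1", "A2", "B1", "B2"])       # counting suffices
assert not accepts_second_gen(["A1", "A2", "B1", "B2"])  # crossed pairing fails
assert accepts_second_gen(["A1", "A2", "B2", "B1"])      # mirrored pairing passes
```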

    Synchronized tapping facilitates learning sound sequences as indexed by the P300

    The purpose of the present study was to determine whether and how tapping a single finger in synchrony with sound sequences contributes to their auditory processing. The participants learned two unfamiliar sound sequences via different methods. In the tapping condition, they learned an auditory sequence while tapping in synchrony with each sound onset. In the no-tapping condition, they learned another sequence while holding down a key until the sequence ended. After these learning sessions, we presented the two sequences again and recorded event-related potentials (ERPs). During the ERP recordings, 10% of the tones within each sequence deviated from the original tones. An analysis of the grand-average ERPs showed that deviant stimuli elicited a significant P300 in the tapping but not in the no-tapping condition. Moreover, the P300 effect in the tapping condition was larger for participants whose tapping was more tightly synchronized during the learning sessions. These results indicated that single-finger tapping promoted the conscious detection and evaluation of deviants within the learned sequences. The effect was related to individuals’ musical ability to coordinate their finger movements with external auditory events.
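
    The deviant-insertion step lends itself to a brief sketch. The code below is a minimal illustration under assumed parameters: the tone frequencies and the +50 Hz pitch shift are placeholders, not the study's values; only the roughly 10% deviance rate comes from the abstract.

```python
import random

def insert_deviants(sequence_hz, rate=0.10, shift_hz=50):
    """Replace ~rate of the tones with pitch-shifted deviants."""
    probe = list(sequence_hz)
    n_deviants = max(1, round(rate * len(sequence_hz)))
    positions = random.sample(range(len(sequence_hz)), n_deviants)
    for i in positions:
        probe[i] += shift_hz  # hypothetical deviant: raise the tone by 50 Hz
    return probe, sorted(positions)

melody = [262, 294, 330, 349, 392, 440, 494, 523, 494, 440]  # placeholder tones
probe, positions = insert_deviants(melody)
print("deviant positions:", positions)
```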

    Measuring context dependency in birdsong using artificial neural networks

    Context dependency is a key feature of the sequential structure of human language, requiring reference between words far apart in the produced sequence. Assessing how far back the past context affects the current state provides crucial information for understanding the mechanisms behind complex sequential behaviors. Birdsong serves as a representative model for studying context dependency in sequential signals produced by non-human animals, although previous estimates were upper-bounded by methodological limitations. Here we estimated the context dependency in birdsong in a more scalable way, using a modern neural-network-based language model whose accessible context length is sufficiently long. The detected context dependency was beyond the order of traditional Markovian models of birdsong but was consistent with previous experimental investigations. We also studied the relation between the assumed/auto-detected vocabulary size of birdsong (i.e., fine- vs. coarse-grained syllable classifications) and the context dependency, finding that the larger the assumed vocabulary (the more fine-grained the classification), the shorter the detected context dependency.
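
    The measurement idea behind this estimate can be sketched compactly: sweep the amount of visible past context and find where predictive performance stops improving. The study used a neural-network language model; in the runnable sketch below, a simple count-based n-gram model stands in for it, and the toy "songs" and add-one smoothing are assumptions.

```python
import math
from collections import Counter

def train(seqs, k):
    """Count-based model predicting the next syllable from the last k."""
    ctx, joint = Counter(), Counter()
    for s in seqs:
        for i in range(len(s)):
            c = tuple(s[max(0, i - k):i])
            ctx[c] += 1
            joint[(c, s[i])] += 1
    return ctx, joint

def mean_nll(seqs, ctx, joint, k, vocab):
    """Mean negative log-likelihood with add-one smoothing."""
    total = n = 0
    for s in seqs:
        for i in range(len(s)):
            c = tuple(s[max(0, i - k):i])
            p = (joint[(c, s[i])] + 1) / (ctx[c] + len(vocab))
            total += -math.log(p)
            n += 1
    return total / n

songs = [list("abcabdabcabd") for _ in range(50)]  # placeholder "songs"
vocab = {syl for s in songs for syl in s}
for k in range(6):  # truncated context length, in syllables
    ctx, joint = train(songs, k)
    print(k, round(mean_nll(songs, ctx, joint, k, vocab), 3))
# The smallest k beyond which the NLL stops dropping estimates the
# context dependency of the sequences.
```

    In the study itself, a neural language model with a sufficiently long accessible context would replace the n-gram stand-in, so that the plateau, rather than the model, bounds the estimate.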

    The integration hypothesis of human language evolution and the nature of contemporary languages

    How human language arose is a mystery in the evolution of Homo sapiens. Miyagawa et al. (2013) put forward a proposal, which we will call the Integration Hypothesis of human language evolution, that holds that human language is composed of two components: E (expressive) and L (lexical). Each component has an antecedent in nature: E as found, for example, in birdsong, and L in, for example, the alarm calls of monkeys. E and L integrated uniquely in humans to give rise to language. A challenge to the Integration Hypothesis is that while these non-human systems are finite-state in nature, human language is known to require characterization by a non-finite-state grammar. Our claim is that E and L, taken separately, are in fact finite-state; when a grammatical process crosses the boundary between E and L, it gives rise to the non-finite-state character of human language. We provide empirical evidence for the Integration Hypothesis by showing that certain processes found in contemporary languages that have been characterized as non-finite-state can in fact be shown to be finite-state. We also speculate on how human language actually arose in evolution through the lens of the Integration Hypothesis.
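
    The finite-state versus non-finite-state contrast on which the hypothesis turns can be illustrated schematically (this is not the paper's formalism). Each component alone is recognizable with bounded memory, whereas matched nesting of the a^n b^n kind, the schematic form of center embedding, requires an unbounded counter and thus exceeds finite-state power.

```python
def finite_state_component(s):
    """Regular pattern (ab)*: recognizable with two states and no counter."""
    state = 0
    for ch in s:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            state = 0
        else:
            return False
    return state == 0

def nested_dependency(s):
    """a^n b^n (n >= 1): requires counting, hence not finite-state."""
    n = len(s) // 2
    return n > 0 and len(s) == 2 * n and s == "a" * n + "b" * n

assert finite_state_component("ababab")
assert nested_dependency("aaabbb") and not nested_dependency("aabab")
```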

    Sequential learning and rule abstraction in Bengalese finches

    The Bengalese finch (Lonchura striata var. domestica) is a species of songbird. Males sing courtship songs with complex note-to-note transition rules, while females discriminate these songs when choosing a mate. The present study uses a serial reaction-time (RT) task to examine the characteristics of the Bengalese finches’ sequential behaviours beyond song production. The birds were trained to produce a sequence with an “A–B–A” structure. Once the RT to each key position had stabilized, we tested the acquisition of the trained sequential response by presenting novel, random three-term sequences (random test). We also examined whether the birds could abstract the rule embedded in the trained sequence and apply it to a novel test sequence (abstract test). Additionally, we examined rule abstraction through example training by increasing the number of examples in baseline training from 1 to 5. When analyzed by sex, training with 5 examples produced no statistically significant differences in the abstract tests, whereas significant differences were observed in the random tests, suggesting that the male birds learned the trained sequences and transferred the abstract structure acquired during training. Individual data indicated that males, unlike females, were likely to learn the motor pattern of the sequence. These results are consistent with observations that males learn to produce songs with complex sequential rules, whereas females do not.
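
    The sequence design can be sketched as follows; the key positions and counts below are placeholders, not the actual apparatus. Trained and abstract-test sequences instantiate the A–B–A frame over key positions, while random-test sequences break it.

```python
import random

KEYS = [1, 2, 3, 4]  # hypothetical response-key positions

def aba_sequence():
    """Three-term sequence obeying the A-B-A frame (first == last)."""
    a, b = random.sample(KEYS, 2)
    return [a, b, a]

def random_sequence():
    """Three-term sequence violating the A-B-A frame (first != last)."""
    while True:
        s = [random.choice(KEYS) for _ in range(3)]
        if s[0] != s[2]:
            return s

training = [aba_sequence() for _ in range(5)]  # up to 5 training examples
print("training:", training)
print("abstract test:", aba_sequence(), "| random test:", random_sequence())
```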

    A Bird’s Eye View of Human Language Evolution

    Comparative studies of linguistic faculties in animals pose an evolutionary paradox: language involves certain perceptual and motor abilities, but it is not clear that these serve as more than an input–output channel for the externalization of language proper. Strikingly, the capability for auditory–vocal learning is not shared with our closest relatives, the apes, but is present in such remotely related groups as songbirds and marine mammals. There is increasing evidence for behavioral, neural, and genetic similarities between speech acquisition and birdsong learning. At the same time, researchers have applied formal linguistic analysis to the vocalizations of both primates and songbirds. What have all these studies taught us about the evolution of language? Is the comparative study of an apparently species-specific trait like language feasible? We argue that comparative analysis remains an important method for the evolutionary reconstruction and causal analysis of the mechanisms underlying language. On the one hand, common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in the case of brain regions involved in auditory perception, vocalization, and auditory memory. On the other hand, there has been convergent evolution of the capacity for auditory–vocal learning, and possibly for the structuring of external vocalizations, such that apes lack the abilities that are shared between songbirds and humans. However, significant limitations to this comparative analysis remain. While all birdsong may be classified in terms of a particularly simple kind of concatenation system, the regular languages, there is no compelling evidence to date that birdsong matches the characteristic syntactic complexity of human language, which arises from the composition of smaller forms like words and phrases into larger ones.